Current Issue: January-March | Volume: 2015 | Issue Number: 1 | Articles: 5
Virtualized reality games offer a highly interactive and engaging user experience, and game-based virtual reality (GBVR) approaches may therefore have significant potential to enhance clinical rehabilitation practice, as traditional therapeutic exercises are often repetitive and boring, reducing patient compliance. The aim of this study was to investigate whether a rehabilitation training programme using GBVR could simultaneously improve both motor skill (MS) and confidence (CON), as both are important determinants of daily living and of physical and social functioning. The study was performed using a nondominant-hand motor deficit model in nonambidextrous healthy young adults, whereby the dominant and nondominant arms acted as the control and intervention conditions, respectively. GBVR training was performed using a commercially available tennis-based game. CON and MS were assessed by having each subject perform a comparable real-world motor task (RWMT) before and after training. Baseline CON and MS for performing the RWMT were significantly lower for the nondominant hand and improved after GBVR training, whereas there were no changes in the dominant (control) arm. These results demonstrate that using a GBVR approach to address a MS deficit in a real-world task can facilitate improvements in both MS and CON, and such approaches may help increase patient compliance...
Making eye contact is one of the most important prerequisites for humans to initiate a conversation with others. However, it is not an easy task for a robot to make eye contact with a human if they are not facing each other initially or if the human is intensely engaged in his/her task. If the robot would like to start communication with a particular person, it should turn its gaze to that person and make eye contact with him/her. However, such a turning action alone is not enough to establish eye contact in all cases. Therefore, in some situations the robot should perform stronger actions so that it can attract the target person before meeting his/her gaze. In this paper, we propose a conceptual model of eye contact for social robots consisting of two phases: capturing attention and ensuring the attention capture. Evaluation experiments with human participants reveal the effectiveness of the proposed model in four viewing situations, namely, central field of view, near peripheral field of view, far peripheral field of view, and out of field of view...
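A minimal sketch of how the two-phase model described in this abstract might be structured as a control loop: a weak cue first, escalating to stronger actions until attention is captured, then a confirmation step before meeting the gaze. The Person class, the notices method, and the probability values are hypothetical stand-ins for real gaze-detection and actuation code, not the paper's implementation.

```python
import random

class Person:
    """Stand-in for gaze perception: stronger robot actions are more
    likely to attract a person engaged in a task or out of view."""
    def notices(self, action_strength: float) -> bool:
        return random.random() < action_strength

def establish_eye_contact(person: Person, max_attempts: int = 5) -> bool:
    strength = 0.3                            # weak cue first (head turn)
    for _ in range(max_attempts):
        # Phase 1: capture attention with the current action strength.
        attracted = person.notices(strength)
        # Phase 2: ensure the attention capture before meeting the gaze.
        if attracted and person.notices(0.9):
            return True                       # mutual gaze established
        strength = min(1.0, strength + 0.3)   # escalate: gesture, utterance
    return False

print("eye contact:", establish_eye_contact(Person()))
```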
Even if their spatial reasoning capabilities remain quite similar to those of sighted people, blind people encounter difficulties in obtaining distant information from their surroundings. Thus, whole-body displacements, tactile map consultations, or auditory solutions are needed to establish physical contact with their environment. Therefore, the accuracy of nonvisual spatial representations relies heavily upon the efficiency of exploration strategies and the ability to coordinate egocentric and allocentric spatial frames of reference. This study aims to better understand the mechanisms of this coordination without vision by analyzing cartographic exploration strategies and assessing their influence on mental spatial representations. Six blind sailors were immersed in a virtual haptic and auditory maritime environment and were required to learn the layout of the map. Their movements were recorded, and we identified several exploration strategies. They then had to estimate the directions of six particular seamarks in aligned and misaligned situations. Better accuracy and coordination were obtained when participants used the "central point of reference" strategy. Our discussion of the articulation between enduring geometric representations and salient transient perceptions provides implications for map reading techniques and for mobility and orientation programs for blind people...
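One way to quantify the accuracy of such direction estimates is the smallest angular difference between each estimated bearing and the true bearing of a seamark. The sketch below shows that computation; the bearing values are illustrative placeholders, not data from the study.

```python
def angular_error(estimate_deg: float, truth_deg: float) -> float:
    """Smallest absolute difference between two bearings, in [0, 180]."""
    return abs((estimate_deg - truth_deg + 180.0) % 360.0 - 180.0)

# Illustrative estimates for six seamarks under the two conditions.
truth      = [10, 75, 140, 200, 265, 330]
aligned    = [14, 70, 150, 195, 260, 338]
misaligned = [40, 55, 170, 230, 240, 300]

for label, est in (("aligned", aligned), ("misaligned", misaligned)):
    errs = [angular_error(e, t) for e, t in zip(est, truth)]
    print(f"{label}: mean error = {sum(errs) / len(errs):.1f} deg")
```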
Segmenting the human hand is important in computer vision applications, for example, sign language interpretation, human-computer interaction, and gesture recognition. However, serious bottlenecks still exist in hand localization systems, such as fast hand motion capture, hand over face, and hand occlusions, on which we focus in this paper. We present a novel method for hand tracking and segmentation based on augmented graph cuts and a dynamic model. First, an effective dynamic model for state estimation is generated, which correctly predicts the location of hands that may undergo fast motion or shape deformations. Second, new energy terms are introduced into the energy function to develop augmented graph cuts based on several cues, namely, spatial information, hand motion, and chamfer distance. The proposed method successfully achieves hand segmentation even when the hand passes over other skin-colored objects. Some challenging videos are provided covering hand over face, hand occlusions, dynamic background, and fast motion. Experimental results demonstrate that the proposed method is much more accurate than other graph-cuts-based methods for hand tracking and segmentation...
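To make the idea of "augmented" energy terms concrete, the sketch below combines the cues named in the abstract (spatial prior from the dynamic model, motion, and chamfer distance) into a per-pixel unary cost. The weights and term definitions are illustrative assumptions, not the paper's exact formulation; in a full pipeline this unary cost plus a pairwise smoothness term would be minimized by a min-cut/max-flow solver.

```python
import numpy as np

def unary_energy(color_prob, spatial_prior, motion_cue, chamfer_dist,
                 w_color=1.0, w_spatial=0.5, w_motion=0.5, w_chamfer=0.3):
    """Per-pixel cost of the 'hand' label (lower = more hand-like).

    color_prob    : P(hand | pixel color), e.g. from a skin-color model
    spatial_prior : P(hand | location) predicted by the dynamic model
    motion_cue    : normalized frame-difference magnitude in [0, 1]
    chamfer_dist  : normalized distance to the predicted hand contour
    """
    eps = 1e-6  # guard against log(0)
    return (w_color   * -np.log(color_prob + eps)
          + w_spatial * -np.log(spatial_prior + eps)
          + w_motion  * -np.log(motion_cue + eps)
          + w_chamfer * chamfer_dist)
```

The spatial and chamfer terms are what let the labeling stay on the hand when it crosses other skin-colored regions such as the face, since color evidence alone is ambiguous there.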
Motivated by the differences between human and robot teams, we investigated the role of verbal communication between human teammates as they work together to move a large object to a series of target locations. Only one member of the group was told the target sequence by the experimenters, while the second teammate had no target knowledge. The two experimental conditions we compared were haptic-verbal (teammates are allowed to talk) and haptic-only (no talking allowed). The team's trajectory was recorded and evaluated. In addition, participants completed a NASA TLX-style postexperimental survey, which gauges workload along six different dimensions. In our initial experiment we found no significant difference in performance when verbal communication was added. In a follow-up experiment, using a different manipulation task, we did find that the addition of verbal communication significantly improved performance and reduced the perceived workload. In both experiments, for the haptic-only condition, we found that a remarkable number of groups independently improvised common haptic communication protocols (CHIPs). We speculate that such protocols can substitute for verbal communication, and that the performance difference between verbal and nonverbal communication may be related to how easily the CHIPs can be distinguished from the motions required for task completion...
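For reference, a NASA TLX-style survey rates workload on six subscales; the unweighted "raw TLX" average shown below is one common scoring convention. The sample ratings are illustrative placeholders, not data from these experiments.

```python
DIMENSIONS = ["mental demand", "physical demand", "temporal demand",
              "performance", "effort", "frustration"]

def raw_tlx(ratings: dict) -> float:
    """Mean of the six 0-100 subscale ratings (unweighted raw TLX)."""
    return sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS)

sample = {"mental demand": 55, "physical demand": 70,
          "temporal demand": 40, "performance": 30,
          "effort": 65, "frustration": 25}
print(f"overall workload: {raw_tlx(sample):.1f} / 100")
```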